What can ML learn from the proof of the Kolmogorov-Arnold theorem
Michael Freedman (Harvard)
10-Sep-2024, 20:45-22:00
Abstract: The Kolmogorov-Arnold representation theorem shows that even very shallow, non-linear neural nets can express general continuous multivariate functions. I will begin by giving a proof. The theorem has often been regarded as "irrelevant" to machine learning because of the unrealistic precision required in its representation of real numbers. I agree with this criticism but will present another path to ML relevance: not through the statement, but through the proof.
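For reference, a standard formulation of the theorem's statement (not quoted from the talk itself): every continuous $f\colon [0,1]^n \to \mathbb{R}$ admits a representation

```latex
f(x_1,\dots,x_n) \;=\; \sum_{q=0}^{2n} \Phi_q\!\left(\sum_{p=1}^{n} \phi_{q,p}(x_p)\right),
```

where the $\Phi_q$ and $\phi_{q,p}$ are continuous univariate functions, and the inner functions $\phi_{q,p}$ can be chosen independently of $f$. The "unrealistic precision" criticism mentioned in the abstract stems from the inner functions: although continuous, they are highly irregular, so evaluating the representation numerically demands accuracy in the real-number arithmetic that practical hardware cannot supply.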
Topics: quantum computing and information; Mathematics; Physics
Audience: researchers in the topic
Comments: Passcode: 657361
Mathematical Picture Language Seminar
Organizer: Arthur Jaffe (contact for this listing)
